Thanks for the test. I've now removed the link to U10.
In my D3D11 version I recently got good results in VR with the Rift, but for some reason the OpenVR version does not work well. I'm looking into that. If I can fix it, I don't know whether the findings will help the public version, but I will check.
Thanks for the test. LFS with D3D9 is quite heavy on the CPU as all the graphics and physics run on a single core, so there's no benefit from your 8 cores. I think the real solution is the new D3D11 version but unfortunately I can't give any estimate for when that will be released.
I'm taking this opportunity to update the OpenVR implementation for the D3D11 version. I might update to the latest OpenVR. I'll post here if I find anything or think of anything worth trying in the public version.
Thanks for the test. It's odd that a higher end computer can perform worse with LFS in VR. I can't come up with a sensible suggestion yet, but we can try another stab in the dark.
Attached to this post is a new LFSOpenVR.dll which works a bit differently.
How to use it:
- Rename the existing one in your LFS\dll folder
- Save this one there then start LFS
This DLL copies the render target submitted by LFS into an intermediate texture each frame before submitting it to the VR system. It's more like the way it works with the Rift. I don't really expect it to help but think it's worth a go.
You can use this DLL with the U9 or U10 patch. I guess U9 is a better choice as the thread-based submit probably does some good.
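For anyone interested, the copy described above is conceptually something like this minimal D3D11 / OpenVR sketch (illustrative names, error checking omitted - not the DLL's actual code):

#include <d3d11.h>
#include <openvr.h>

// 'context' is the D3D11 device context, 'lfsRenderTarget' is the texture LFS just rendered,
// and 'intermediate' is a texture created once with the same size and format.
void SubmitEyeCopy(ID3D11DeviceContext* context, ID3D11Texture2D* lfsRenderTarget,
                   ID3D11Texture2D* intermediate, vr::EVREye eye)
{
    // copy this frame into the intermediate texture...
    context->CopyResource(intermediate, lfsRenderTarget);

    // ...then hand the copy (not the live render target) to the VR compositor
    vr::Texture_t tex = {};
    tex.handle = intermediate;
    tex.eType = vr::TextureType_DirectX;
    tex.eColorSpace = vr::ColorSpace_Gamma;
    vr::VRCompositor()->Submit(eye, &tex);
}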
I've tried that small change in a test patch now.
It also includes an experimental VR test so it's not the 'official' test patch yet, but you can find it here: https://www.lfs.net/forum/post/1954788#post1954788
Thanks for the logs. I see LFS resolution adjustment is set to 100% (the "RT size" line near the start is the same as the "size" line near the end).
In the case with SteamVR set to 100%, your pre-distortion render target size is 4032x2240. These are bigger than the physical screen resolution because of the distortion: all parts of the render target further out from the eye centre points are deliberately squashed when they are mapped to the screens inside the headset, to compensate for the lens distortion.
If LFS was set to 225% at this time, that would multiply both X and Y by 1.5 (the square root of 2.25), so the render target in-game would be 6048x3360, which is really a massive image to render. It's interesting that you say that doesn't seem to harm performance much. I suppose the graphics card is very powerful and has a lot of memory.
So it looks like the issues really come from somewhere else. Probably the way I've implemented the D3D9->D3D11 texture sharing.
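Just for reference, the render target arithmetic above is nothing more than this (numbers from your log; the percentage is treated as an area scale):

#include <cmath>
#include <cstdio>

int main()
{
    double base_w = 4032, base_h = 2240;                 // reported at SteamVR 100%
    double lfs_percent = 225;                            // LFS resolution adjustment
    double axis_scale = std::sqrt(lfs_percent / 100.0);  // 225% of the area = 1.5 per axis
    std::printf("%.0f x %.0f\n", base_w * axis_scale, base_h * axis_scale); // 6048 x 3360
}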
I've tried a small change in 0.6U10 and would like to know if it affects your frame dropping problems at all. It now avoids the use of a thread to submit the frame. The thread was meant to allow LFS to continue processing without waiting for the render to finish. But maybe it causes other problems. It's worth a quick test.
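In code terms the difference between the two approaches is roughly this (a minimal sketch with made-up names - Frame and SubmitToVR are stand-ins, not LFS's actual code):

#include <thread>

struct Frame { /* render target handle etc. (illustrative) */ };

void SubmitToVR(const Frame& frame) { /* hand the frame to the VR compositor */ }

void PresentFrame_U9(const Frame& frame)
{
    // U9 style: submit from a worker thread so the main thread can carry on immediately
    std::thread([frame] { SubmitToVR(frame); }).detach();
}

void PresentFrame_U10(const Frame& frame)
{
    // U10 style: submit directly and accept the wait on the main thread
    SubmitToVR(frame);
}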
[EDIT: removed link to U10 patch - not recommended]
Changes from 0.6U9 to 0.6U10:
VR test:
- Different method of submitting frame to VR
- Avoiding the use of a thread to submit image
- It's possible this may avoid dropped frames
View setup:
- LFS now assumes 3 screens when aspect ratio is 4:1
- previously assumed 3 screens when aspect ratio was 3:1
- this improves support for ultrawide monitors
Some more updated translations - thank you translators!
DOWNLOAD:
IF YOU ALREADY HAVE 0.6U: PATCH 0.6U TO 0.6U10 (SELF EXTRACTING ARCHIVE)
[EDIT: removed link to U10 patch - not recommended]
I don't really know about the future, but in the short and medium term that seems pretty much impossible. I have quite a few things to get done for the release. Learning another graphics system and supporting it side by side with D3D11 would be a mistake. Even if it might be interesting from a programming viewpoint, I expect it would take months to do. I've just done three months of very intensive work this year converting to D3D11 (although I did some other things too - not all that time was on the port) so the thought of another port is not something I can imagine now!
I read that D3D12 has similarities with Vulkan so if we are still developing LFS in quite a few years from now, I guess it would be time to make a decision between D3D12 and Vulkan.
It's good to hear that D3D11 can still work on Linux, because I don't want to restrict users to Windows. But I'm not so pleased to hear that D3D11 support requires a DX12/Vulkan GPU.
I don't have any plans to set up a Linux computer with a super graphics card to test DXVK - we don't have those kinds of resources, in space, time, money, etc. But hearing that all those other games run fine, I would be surprised if LFS didn't.
Thanks for the test. It's interesting that the 'unequal screen widths' can be used to adjust the interface, even when you set the number of left and right side screens to zero.
I think that can be used as a temporary solution but I should try changing the 3-screen detection to 4:1 instead of 3:1 in a test patch, as discussed.
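A minimal sketch of that detection change, assuming it is a simple threshold on the window's aspect ratio (the actual check in LFS may well differ):

bool AssumeThreeScreens(int width, int height)
{
    // previously: 3:1 or wider was treated as a triple-screen setup,
    // which could misdetect ultrawide single monitors
    // return width >= height * 3;

    // now: require 4:1 or wider
    return width >= height * 4;
}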
Maybe make sure it really is 0.5F2 (bottom of entry screen) and try again now. Just in case there was any temporary problem with the master server when you tried before.
Yes, the CPU sky is for:
- ambient lighting spherical harmonic (for directional ambient lighting)
- maximum brightness value (needed before the GPU sky is generated - see below)
- average sky colour (I'm trying to move away from any uses of this)
The GPU sky can still be stored as a 32-bit SRGB texture; it doesn't need to be 64-bit HDR if I know the maximum value (from the CPU) before it is generated on the GPU. Each pixel is then multiplied by (1.0 / max_brightness) so the sky uses the full 24-bit range of colours.
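As a minimal sketch of the idea (illustrative names, and an assumption that the renderer scales by max_brightness again when it samples the sky - not LFS's actual code):

#include <algorithm>
#include <vector>

// CPU side: find the brightest value while generating the small CPU sky
float MaxBrightness(const std::vector<float>& cpu_sky_luminance)
{
    return *std::max_element(cpu_sky_luminance.begin(), cpu_sky_luminance.end());
}

// GPU side (conceptually): each generated pixel is divided by max_brightness so the
// whole range fits the 0..1 range of the 32-bit SRGB texture with full precision;
// presumably the renderer multiplies by max_brightness when sampling to recover HDR.
float EncodeSkyPixel(float hdr_value, float max_brightness)
{
    return std::min(hdr_value / max_brightness, 1.0f);
}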
I'd sometimes like to be able to generate things on the GPU and read the texture back into system memory for analysis by the program, and I do this (not in real time) in a few places:
- baked ambient and artificial lighting render
- path-based echo map
- path-based ambient lighting render for cars (called 'lightmap' in LFS)
- path-based occlusion test (called 'optimiser' in LFS)
- calculating average colours from textures
But as far as I know, reading back a texture from GPU to CPU will always cause a small glitch or hesitation, because the CPU must stop and wait, when it calls GetRenderTargetData, until the GPU is ready to send the data. I'm not sure if this is the case with later versions of DirectX - I think later versions are better for getting data back from the GPU, because of compute shaders and so on. But I think this is a limitation of DX9, so I've avoided using it in real-time situations.
Anyway, the CPU sky at 64x64 only takes 3 milliseconds in the debug build, and as it's on that separate thread there is no glitch at all, so I think it's quite good now. The lighting system judges that the sun has moved enough to need a new sky and asks the CPU sky thread to make one, then when the CPU sky is generated a few ms later, the main thread tells the GPU to generate the new sky (which seems to take no time at all).
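For reference, the blocking DX9 readback mentioned above looks roughly like this (a sketch with illustrative names and minimal error handling - not LFS's actual code):

#include <d3d9.h>

// 'device' and 'renderTarget' are assumed to exist already
bool ReadBackRenderTarget(IDirect3DDevice9* device, IDirect3DSurface9* renderTarget,
                          UINT width, UINT height, D3DFORMAT format)
{
    IDirect3DSurface9* sysmem = nullptr;
    if (FAILED(device->CreateOffscreenPlainSurface(width, height, format,
        D3DPOOL_SYSTEMMEM, &sysmem, nullptr)))
        return false;

    // this is the call that stalls: the CPU waits here until the GPU has finished
    if (SUCCEEDED(device->GetRenderTargetData(renderTarget, sysmem)))
    {
        D3DLOCKED_RECT rect;
        if (SUCCEEDED(sysmem->LockRect(&rect, nullptr, D3DLOCK_READONLY)))
        {
            // rect.pBits now points at the pixel data for analysis on the CPU
            sysmem->UnlockRect();
        }
    }
    sysmem->Release();
    return true;
}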
In the interests of explaining how development goes...
Warning, totally technical and not interesting for most people!
Most of the day was spent restructuring the generated sky system a bit and removing some old code that is no longer needed, so that yesterday's test has now become proper code that will stay, instead of something rapidly hacked in for test purposes. Although generating the sky on the GPU instead of the CPU is much faster, there was still quite a perceptible glitch, long enough to cause a click in the sound each time a new sky was generated - which can be about once every 20 seconds around sunset / sunrise. The click was in my debug version of LFS and might not have happened in the release build, but it's best to make it glitch-free in the debug version: then it's sure to be OK in the release build, and it also won't annoy me all the time while developing.
The glitch turned out to be because I started a new thread to generate the sky each time. A 'CPU sky' still needs to be generated for reasons related to lighting (in addition to the GPU sky), but now the CPU sky can be much smaller because it is not for direct display. Still, it's good to do it on a separate thread to make use of our multi-core CPUs. The solution to the glitch was to have a sky generation thread running the whole time, just waiting for a command from the weather system to generate a new sky texture when required. It turns out that starting and stopping threads is quite expensive in debug builds.
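Roughly, the setup is like this minimal sketch (illustrative names, not LFS's actual code): one worker thread created at startup, sleeping on a condition variable until a new sky is requested.

#include <condition_variable>
#include <mutex>
#include <thread>

class SkyThread
{
public:
    SkyThread() : worker([this] { Run(); }) {}
    ~SkyThread()
    {
        { std::lock_guard<std::mutex> lock(m); quit = true; }
        cv.notify_one();
        worker.join();
    }
    // called by the weather system when a new sky texture is needed
    void RequestNewSky()
    {
        { std::lock_guard<std::mutex> lock(m); requested = true; }
        cv.notify_one();
    }
private:
    void Run()
    {
        std::unique_lock<std::mutex> lock(m);
        for (;;)
        {
            // sleep here - no thread is created or destroyed per request
            cv.wait(lock, [this] { return requested || quit; });
            if (quit)
                return;
            requested = false;
            lock.unlock();
            GenerateCpuSky();   // the small CPU sky, a few ms of work
            lock.lock();
        }
    }
    void GenerateCpuSky() { /* ... */ }

    std::mutex m;
    std::condition_variable cv;
    bool requested = false;
    bool quit = false;
    std::thread worker;   // declared last so it starts after the members above exist
};

The main thread just calls RequestNewSky() when the lighting system decides the sun has moved enough, and triggers the GPU sky generation once the result is ready.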
While testing the updated sky I was driving around South City at sunset and started looking into the remaining white pixels, now that the worst offenders were cured by bounding the fog exponential. I applied the _centroid interpolation modifier to various inputs to the pixel shader until I located the one that was causing the white pixels. It turned out to be the ambient lighting data from artificial lights. Now I can drive around South City or visit those camera locations at Blackwood without seeing any bright pixels at all.
So, a good day's work, though not the sort of thing that sounds very exciting to most people.
Thanks for the test. In free view mode the camera continues to move tiny amounts because of the camera smoothing system, so this can cause flicker if there's something prone to flickering in the view. I see on some of the railings what looks like a missing line that is sensitive to camera positions, which I think is related to unshared edges, probably due to triangle sub-splitting for the vertex lighting system. These artifacts seem to be more common with MSAA. I'm getting a lot of them (apparently missing lines) in the distance (near the red pixels) if I switch smoothly between two stored views (at the /cp locations above).
Thanks for the info and link. I searched for the centroid semantic mentioned there and learned about the 'centroid interpolation modifier'. In shader models 2 and 3, the modifier can be added to the TEXCOORD semantic.
I did that and it made the red pixels disappear!
Exact description: In the VS_OUTPUT structure in the test shaders (pixel and vertex) I changed the line: float FOGTEST : TEXCOORD1; // TEST
to: float FOGTEST : TEXCOORD1_centroid; // TEST
So now the value of FOGTEST is calculated at a point within the triangle instead of outside it, avoiding the extrapolation problem that caused the 'unexpected' values.
Thanks, I'm going to try and see if my car has one of these. I know it adjusts the radio volume as I go faster which is conceptually similar.
There's already fog in public LFS but it is linear. The new version has exponential fog which is closer to reality. I'm not talking about thick fog, just the normal slight visibility reduction due to distance.
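For reference, the two fog models are just this (the standard formulas, not LFS's exact shader code; start, end and density are illustrative parameters):

#include <algorithm>
#include <cmath>

// both functions return the fraction of the scene colour that remains (1 = no fog)

// linear fog: visibility fades linearly between a start and an end distance
float FogLinear(float dist, float start, float end)
{
    return std::clamp((end - dist) / (end - start), 0.0f, 1.0f);
}

// exponential fog: exp(-density * dist), closer to how visibility really falls off
float FogExponential(float dist, float density)
{
    return std::exp(-density * dist);
}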
Anyway here's another couple of pictures. Note, Eric hasn't worked on the lighting in this version of Blackwood, it's just my test version with the lights switched on and things aren't really balanced correctly yet.
On my side there is still work to do on the graphics. I'm working on a bloom effect right now. I still need to improve the baked lighting rendering system to reduce some ugly vertex lighting issues you don't see too much of in my well-placed screenshots.
Then I still need to work on the physics and improve the security of the master server protocol.
Meanwhile Eric is continuing with South City, and still needs to finish Kyoto and update Fern Bay.
TRM.13: I've moved our discussion about your crash into a separate thread in the bug reports section. That's because it's quite a long discussion now and isn't really related to the test patch (although I'd like to fix it in a test patch). https://www.lfs.net/forum/thread/93703
Detailed live telemetry is now available in a new customisable OutSim packet.
It is a combination of the RAF data and OutSim data. If this might be useful to you, I'd like to hear if there is any more information you would like to receive or any problems you see with this system. It is a first test in this test patch so please expect it to change.
The position of the car is the location of the reference point described in the CAR_info specification.
Enable detailed OutSim by setting the OutSim Opts value in cfg.txt
All extra data can be switched on with the value ff
The new data is documented in the attached header file OutSimPack.h
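If you just want to see the packets arriving before writing a proper parser, a minimal Winsock sketch like this would do (the port number here is only an example and must match your OutSim settings in cfg.txt; the buffer would be cast to the structures in OutSimPack.h):

#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(30000);          // example - use the port set in cfg.txt
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(s, (sockaddr*)&addr, sizeof(addr));

    char buf[512];                         // sized generously for the extended packet
    for (;;)
    {
        int bytes = recv(s, buf, sizeof(buf), 0);
        if (bytes <= 0)
            break;
        // cast 'buf' to the structures defined in OutSimPack.h to read the fields
        std::printf("received %d bytes\n", bytes);
    }
    closesocket(s);
    WSACleanup();
}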
Maybe I need to add some logging in a test patch, so any reason for starting the exit process is logged as a line in deb.log.
For now, I just wondered what would happen if you start LFS in VR mode. You can do that by creating a shortcut to LFS.exe and adding the command line option /vr=openvr
So for example the Target line in shortcut properties might be: C:\LFS\LFS.exe /vr=openvr
You can assign the text /vr reset to the wheel button, to replicate F8.
And assign the command /press space to map the space key to a button.
Let me know if that's not clear.
I won't get the VR headset out for a test right now, but I think you are saying that when the text box is visible, the cross hairs are also visible, and the space key can then perform one of two functions: either adding a space character to the text, or performing a 'click' if the cross hairs are over a button - which happens unexpectedly while typing unless you are careful about where you are looking. That does sound quite annoying.
I will try to remember this. I wonder if there could be some kind of solution without needing to reassign the space bar.
Thanks for the Valve Index feedback. I also have heard from a few other people that it was working fine. I guess you are using the test patch as I see you posted there.
I understand there is a frame rate mismatch, as the LFS physics runs at 100 Hz. I'm interested to know if you think LFS looks better at 120Hz than at 90Hz.
My understanding is this:
90Hz: there is a 1/100th sec glitch forward every 1/10th of a second (a physics frame is missed 10 times per second).
120Hz: there is a tiny pause every 1/20th of a second (the same physics frame is drawn twice, 20 times per second).
I think these two cases would have a different appearance. Someone said that 120Hz looked better.
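A tiny sketch that counts the mismatches over one second, assuming each display refresh draws the most recent completed physics frame:

#include <cstdio>
#include <initializer_list>

int main()
{
    const int physics_hz = 100;
    for (int display_hz : {90, 120})
    {
        int skipped = 0, repeated = 0;
        for (int i = 0; i < display_hz; i++)
        {
            int a = i * physics_hz / display_hz;        // physics frame shown at refresh i
            int b = (i + 1) * physics_hz / display_hz;  // physics frame shown at refresh i+1
            if (b == a)
                repeated++;             // the same physics frame is drawn again
            else
                skipped += b - a - 1;   // physics frames that are never drawn
        }
        std::printf("%dHz: %d skipped, %d repeated physics frames per second\n",
                    display_hz, skipped, repeated);
    }
}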
For racers who have a Direct Drive wheel, Test Patch U7 has some new settings that allow you to remove a notchy force feeling and receive force updates with the minimum delay. And there is no longer a reason to use frame rates above 100 fps.
For OSW wheels the recommended settings are:
FF Steps -> 10000 (for max resolution allowed by DirectInput)
FF Rate -> 100 Hz (allows FF updates to match LFS physics rate)
I am interested to know if these are the best settings for all direct drive wheels. And to allow for the possibility of automatic presets, it would be useful to know the exact name of your wheel as it appears in LFS (see attached screen shot).
EDIT - test patch download link removed - all these updates are in the latest official version
The update does allow better settings for ordinary geared wheels too, but it is not recommended to go for the maximum. The default settings (400 Steps / 50 Hz Rate) may be all you need.
New settings are available under Axes / FF in Options - Controls
FF Steps maximum value is now 10000 (the maximum in DirectInput)
FF Rate is now controlled by a user setting (25 / 50 / 100 Hz)
VR:
Names over cars could fade differently in each eye in Pimax headset
Misc:
Gearshift debounce maximum setting restored to 200 ms (default 20)
More translations updated - thank you translators!